
    The fundamental role of direct observation in the decision to entrust a professional responsibility

    Background: Entrustment decisions may be retrospective (based on past experiences with a trainee) or real-time (based on direct observation). We investigated judgments of entrustment based on assessor prior knowledge of candidates and based on systematic direct observation, conducted in an objective structured clinical exam (OSCE). Methods: Sixteen faculty examiners provided 287 retrospective and real-time entrustment ratings of 16 cardiology trainees during OSCE stations in 2019 and 2020. Reliability and validity of these ratings were assessed by comparing correlations across stations as a measure of reliability, differences across postgraduate years as an index of construct validity, correlation to a standardized in-training exam (ITE) as a measure of criterion validity, and reclassification of entrustment as a measure of consequential validity. Results: Both retrospective and real-time assessments were highly reliable (all intra-class correlations >0.86). Both increased with year of postgraduate training. Real-time entrustment ratings were significantly correlated with standardized ITE scores; retrospective ratings were not. Real-time ratings explained 37% (2019) and 46% (2020) of variance in examination scores vs. 21% (2019) and 7% (2020) for retrospective ratings. Direct observation resulted in a different level of entrustment compared with retrospective ratings in 44% of cases (p < 0.001). Conclusions: Ratings based on direct observation made unique contributions to entrustment decisions.
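
    As an illustration of the criterion-validity comparison above, the "variance in examination scores explained" by a set of ratings is the squared correlation between the ratings and the exam scores. The short Python sketch below shows the calculation; the arrays are invented illustrative values, not the study's data.

```python
# Illustrative sketch: variance in exam scores explained by entrustment ratings.
# The arrays below are made-up example values, not data from the study.
import numpy as np
from scipy import stats

# One real-time entrustment rating and one standardized ITE score per trainee
realtime_rating = np.array([2, 3, 3, 4, 4, 5, 3, 4, 2, 5, 4, 3, 5, 4, 3, 4])
ite_score = np.array([55, 62, 60, 71, 68, 80, 63, 70, 52, 78, 72, 61, 82, 69, 64, 73])

r, p = stats.pearsonr(realtime_rating, ite_score)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
print(f"Variance explained (r^2) = {r**2:.1%}")
```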

    Enriched biodiversity data as a resource and service

    Background: Recent years have seen a surge in projects that produce large volumes of structured, machine-readable biodiversity data. To make these data amenable to processing by generic, open source “data enrichment” workflows, they are increasingly being represented in a variety of standards-compliant interchange formats. Here, we report on an initiative in which software developers and taxonomists came together to address the challenges and highlight the opportunities in the enrichment of such biodiversity data by engaging in intensive, collaborative software development: The Biodiversity Data Enrichment Hackathon. Results: The hackathon brought together 37 participants (including developers and taxonomists, i.e. scientific professionals who gather, identify, name and classify species) from 10 countries: Belgium, Bulgaria, Canada, Finland, Germany, Italy, the Netherlands, New Zealand, the UK, and the US. The participants brought expertise in processing structured data, text mining, development of ontologies, digital identification keys, geographic information systems, niche modeling, natural language processing, provenance annotation, semantic integration, taxonomic name resolution, web service interfaces, workflow tools and visualisation. Most use cases and exemplar data were provided by taxonomists. One goal of the meeting was to facilitate re-use and enhancement of biodiversity knowledge by a broad range of stakeholders, such as taxonomists, systematists, ecologists, niche modelers, informaticians and ontologists. The suggested use cases resulted in nine breakout groups addressing three main themes: i) mobilising heritage biodiversity knowledge; ii) formalising and linking concepts; and iii) addressing interoperability between service platforms. Another goal was to further foster a community of experts in biodiversity informatics and to build human links between research projects and institutions, in response to recent calls to further such integration in this research domain. Conclusions: Beyond deriving prototype solutions for each use case, areas of inadequacy were discussed and are being pursued further. It was striking how many possible applications for biodiversity data there were and how quickly solutions could be put together when the normal constraints to collaboration were broken down for a week. Conversely, mobilising biodiversity knowledge from its silos in heritage literature and natural history collections will continue to require formalisation of the concepts (and the links between them) that define the research domain, as well as increased interoperability between the software platforms that operate on these concepts.

    Optimizing self-regulation of performance: is mental effort a cue?

    Accurate self-regulation of performance is important for trainees. Trainees rely on cues to make monitoring judgments that guide how they self-regulate their performance. Ideally, cues and monitoring judgments accurately reflect performance, as measured by cue diagnosticity (the ability of a cue to predict performance) and monitoring accuracy (the ability of a monitoring judgment to predict performance). However, this process is far from perfect, emphasizing the need for more accurate cues and monitoring judgments. The mental effort of a task may be a cue used to inform certainty judgments. The purpose of this study was to measure cue utilization and cue diagnosticity of mental effort, and monitoring accuracy of certainty, for self-regulation of performance. Focusing on the task of ECG interpretation, 22 PGY 1-3 Internal Medicine residents at McMaster University each provided a diagnosis for 10 ECGs, rating their level of certainty (0–100%) and mental effort (Paas scale, 1–9). The 220 resulting ECG interpretations were analyzed using path analysis. There was a moderate negative path coefficient between certainty and mental effort (ÎČ = −0.370, p < 0.001), reflecting cue utilization. Cue diagnosticity of mental effort was reflected in a small negative path coefficient between mental effort and diagnostic accuracy (ÎČ = −0.170, p = 0.013). Regarding monitoring accuracy, a moderate path coefficient was observed between certainty and diagnostic accuracy (ÎČ = 0.343, p < 0.001). Our results support mental effort as a cue and certainty as a monitoring judgment for self-regulated performance, although the observed path coefficients were modest. Future research is needed to identify additional cues.
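
    The path model described above can be approximated by two regressions on standardized variables: certainty regressed on mental effort (cue utilization), and diagnostic accuracy regressed on mental effort and certainty (cue diagnosticity and monitoring accuracy). The sketch below shows this simplified linear version; the file name and column names are assumptions, and the study's exact estimation method may have differed (for example, accuracy per ECG is binary, which a dedicated SEM tool would model differently).

```python
# Simplified path-analysis sketch under assumed column names; not the study's code.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed input: one row per interpreted ECG with columns effort, certainty, accuracy
df = pd.read_csv("ecg_ratings.csv")  # hypothetical file name

cols = ["effort", "certainty", "accuracy"]
z = df[cols].apply(lambda s: (s - s.mean()) / s.std())  # z-score so slopes are standardized

# Path 1: cue utilization (does mental effort inform certainty?)
cue_utilization = smf.ols("certainty ~ effort", data=z).fit()
# Paths 2 and 3: cue diagnosticity (effort -> accuracy) and monitoring accuracy (certainty -> accuracy)
performance = smf.ols("accuracy ~ effort + certainty", data=z).fit()

print("effort -> certainty:  ", round(cue_utilization.params["effort"], 3))
print("effort -> accuracy:   ", round(performance.params["effort"], 3))
print("certainty -> accuracy:", round(performance.params["certainty"], 3))
```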

    The critical role of direct observation in entrustment decisions

    Background: Entrustment decisions may be retrospective (based on past experiences with a trainee) or real-time (based on direct observation). We investigated judgments of entrustment based on assessor prior knowledge of candidates and based on systematic direct observation, conducted in an objective structured clinical exam (OSCE). Methods: Sixteen faculty examiners provided 287 retrospective and real-time entrustment ratings of 16 cardiology trainees during OSCE stations in 2019 and 2020. Reliability and validity of these ratings were assessed by comparing correlations across stations as a measure of reliability, differences across postgraduate years as an index of construct validity, correlation to a standardized in-training exam (ITE) as a measure of criterion validity, and reclassification of entrustment as a measure of consequential validity. Results: Both retrospective and real-time assessments were highly reliable (all intra-class correlations >0.86). Both increased with year of postgraduate training. Real-time entrustment ratings were significantly correlated with standardized ITE scores; retrospective ratings were not. Real-time ratings explained 37% (2019) and 46% (2020) of variance in examination scores vs. 21% (2019) and 7% (2020) for retrospective ratings. Direct observation resulted in a different level of entrustment compared with retrospective ratings in 44% of cases (p < 0.001). Conclusions: Ratings based on direct observation made unique contributions to entrustment decisions.

    GBIF Integration of Open Data 

    The Global Biodiversity Information Facility (GBIF) runs a global data infrastructure that integrates data from more than 1700 institutions. Combining data at this scale has been achieved by deploying open Application Programming Interfaces (APIs) that adhere to the open data standards provided by Biodiversity Information Standards (TDWG). In this presentation, we will provide an overview of the GBIF infrastructure and APIs and provide insight into lessons learned while operating and evolving the systems, such as long-term API stability, ease of use, and efficiency. This will include the following topics:

    The registry component provides RESTful APIs for managing the organizations, repositories and datasets that comprise the network, and for controlling access permissions. Stability and ease of use have been critical to this component being embedded in many systems.

    Changes within the registry trigger data crawling processes, which connect to external systems through their APIs and deposit datasets into GBIF's central data warehouse. One challenge here relates to the consistency of data across a distributed network.

    Once a dataset is crawled, the data processing infrastructure organizes and enriches data using reference catalogues accessed through open APIs, such as the vocabulary server and the taxonomic backbone. Being able to process data quickly as source data and reference catalogues change is a challenge for this component.

    The data access APIs provide search and download services. Asynchronous APIs are required for some of these aspects, and long-term stability is a requirement for widespread adoption. Here we will talk about policies for schema evolution that avoid incompatible changes, which would cause failures in client systems.

    The APIs that drive the user interface have specific needs, such as efficient use of network bandwidth. We will present how we approached this and how we are currently adopting GraphQL as the next generation of these APIs.

    There are several APIs that we believe are of use to the data publishing community, including APIs that help with data quality and APIs that surface new data of interest through the data clustering algorithms GBIF deploys.
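
    As an illustration of the data-access and taxonomic-backbone APIs mentioned above, the sketch below resolves a scientific name against the GBIF backbone and then runs a small occurrence search through the public v1 REST API. The species name and the printed fields are arbitrary choices for the example.

```python
# Minimal sketch: calling GBIF's public REST APIs (backbone name match, then occurrence search).
import requests

BASE = "https://api.gbif.org/v1"

# Resolve a scientific name against the GBIF taxonomic backbone
match = requests.get(f"{BASE}/species/match", params={"name": "Puma concolor"}).json()
taxon_key = match["usageKey"]

# Search occurrence records for the matched taxon (paginated via limit/offset)
occ = requests.get(f"{BASE}/occurrence/search",
                   params={"taxonKey": taxon_key, "limit": 5}).json()

print(f"Matched '{match['scientificName']}' (key {taxon_key}); "
      f"{occ['count']} occurrence records found")
for rec in occ["results"]:
    print(rec.get("country"), rec.get("eventDate"), rec.get("basisOfRecord"))
```

    The same pattern applies to the registry APIs (for example, listing datasets under the /dataset resource); only the resource path and query parameters change.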